Causal Effect Inference with Deep Latent-Variable Models
Learning individual-level causal effects from observational data, such as inferring the most effective medication for a specific patient, is a problem of growing importance for policy makers. The most important aspect of inferring causal effects from observational data is the handling of confounders, factors that affect both an intervention and its outcome. A carefully designed observational study attempts to measure all important confounders. However, even if one does not have direct access to all confounders, there may exist noisy and uncertain measurements of proxies for confounders. We build on recent advances in latent variable modeling to simultaneously estimate the unknown latent space summarizing the confounders and the causal effect. Our method is based on Variational Autoencoders (VAEs), which follow the causal structure of inference with proxies. We show that our method is significantly more robust than existing methods, and matches the state-of-the-art on previous benchmarks focused on individual treatment effects.
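The confounding problem the abstract describes can be made concrete with a toy simulation of the proxy structure (latent confounder z influencing proxy x, treatment t, and outcome y). The sketch below is illustrative only, not the paper's CEVAE model: it shows how a naive difference in means is biased by the hidden confounder, while back-door adjustment on z (observable here by construction) recovers the true effect. All numeric values are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Toy version of the proxy structure: z -> x (proxy), z -> t, z -> y.
z = rng.binomial(1, 0.5, n)                       # latent confounder
x = z + rng.normal(0, 1.0, n)                     # noisy proxy of z
t = rng.binomial(1, np.where(z == 1, 0.8, 0.2))   # treatment depends on z
y = 2.0 * t + 3.0 * z + rng.normal(0, 0.1, n)     # outcome; true ATE = 2

# Naive comparison is confounded: E[y|t=1] - E[y|t=0] = 2 + 3*(0.8-0.2) = 3.8.
naive_ate = y[t == 1].mean() - y[t == 0].mean()

# Back-door adjustment: average the within-stratum effect over z.
adjusted_ate = np.mean([
    y[(t == 1) & (z == s)].mean() - y[(t == 0) & (z == s)].mean()
    for s in (0, 1)
])

print(round(naive_ate, 2), round(adjusted_ate, 2))  # ~3.8 (biased), ~2.0
```

CEVAE's contribution is precisely the hard part this toy skips: z is not observed, so the model must infer a latent substitute for it from the noisy proxy x while jointly estimating the effect.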
Reviews: Adapting Neural Networks for the Estimation of Treatment Effects
The paper addresses the problem of inferring causal effects from observational data under the "no hidden confounders" assumption. Recently there has been much interest in this problem from the machine learning community, including several papers proposing neural net architectures tailored for it. This paper proposes a new regularization scheme for the task. The idea is inspired by TMLE, a well-known method for doubly-robust estimation of treatment effects. However, TMLE serves only as inspiration: the regularization scheme and resulting architecture are distinct and novel.
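To ground the "doubly-robust" terminology the review uses, the sketch below implements the classical AIPW (augmented inverse-propensity-weighted) estimator on synthetic data with an observed confounder. This is a standard textbook estimator used here for illustration; it is not the paper's TMLE-inspired regularization scheme, and the data-generating numbers are made up.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50_000

# Synthetic data with one observed confounder x; true ATE = 1.5.
x = rng.normal(0, 1, n)
g = 1 / (1 + np.exp(-x))                 # true propensity P(t=1 | x)
t = rng.binomial(1, g)
y = 1.5 * t + 2.0 * x + rng.normal(0, 0.5, n)

def fit_linear(xs, ys):
    """Least-squares fit y ~ a*x + b, returned as a callable."""
    A = np.column_stack([xs, np.ones(len(xs))])
    coef, *_ = np.linalg.lstsq(A, ys, rcond=None)
    return lambda xn: coef[0] * xn + coef[1]

q1 = fit_linear(x[t == 1], y[t == 1])    # outcome model for treated arm
q0 = fit_linear(x[t == 0], y[t == 0])    # outcome model for control arm

# AIPW: outcome-model estimate plus propensity-weighted residual correction.
# Consistent if either the outcome models or the propensity model is correct.
ate = np.mean(q1(x) - q0(x)
              + t * (y - q1(x)) / g
              - (1 - t) * (y - q0(x)) / (1 - g))
print(round(ate, 2))   # close to the true ATE of 1.5
```

TMLE achieves the same double-robustness property via a targeted fluctuation of the outcome model rather than an additive correction, which is the property the reviewed paper folds into its training objective.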
Reviews: Causal Effect Inference with Deep Latent-Variable Models
However, five other causal diagrams can also be considered with confounder proxies (see Figure 1 in [Miao et al., 2016]). It would be interesting to check whether the proposed method can be applied to all of these causal diagrams, or whether there are limitations. For instance, why are the covariates x_i assumed to be conditionally independent given z_i? Does this restrict the expressive power of the method? Similarly, in the Experimental Results section it is assumed that z is a 20-dimensional variable.
Causal Effect Inference with Deep Latent-Variable Models
Louizos, Christos, Shalit, Uri, Mooij, Joris M., Sontag, David, Zemel, Richard, Welling, Max
Adversarial Balancing-based Representation Learning for Causal Effect Inference with Observational Data
Du, Xin, Sun, Lei, Duivesteijn, Wouter, Nikolaev, Alexander, Pechenizkiy, Mykola
Learning causal effects from observational data greatly benefits a variety of domains such as healthcare, education and sociology. For instance, one could estimate the impact of a policy to decrease the unemployment rate. The central problem for causal effect inference is dealing with the unobserved counterfactuals and treatment selection bias. The state-of-the-art approaches focus on solving these problems by balancing the treatment and control groups. However, during the learning and balancing process, highly predictive information from the original covariate space might be lost. In order to build more robust estimators, we tackle this information loss problem by presenting a method called Adversarial Balancing-based representation learning for Causal Effect Inference (ABCEI), based on recent advances in deep learning. ABCEI uses adversarial learning to balance the distributions of the treatment and control groups in the latent representation space, without any assumption on the form of the treatment selection/assignment function. ABCEI preserves information useful for predicting causal effects under the regularization of a mutual information estimator. We conduct various experiments on several synthetic and real-world datasets. The experimental results show that ABCEI is robust against treatment selection bias, and matches or outperforms the state-of-the-art approaches.
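The adversarial-balancing idea can be illustrated with a minimal numpy sketch: an "adversary" (here a plain logistic-regression discriminator, not ABCEI's actual architecture) tries to predict treatment assignment from a representation. If the representation is balanced across groups, the adversary's accuracy falls to chance level. The feature construction and function names below are illustrative assumptions, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4000

# Toy data: feature 0 is confounded with treatment (selection bias),
# feature 1 is independent of treatment.
t = rng.binomial(1, 0.5, n)
x = np.column_stack([t + rng.normal(0, 0.5, n),   # confounded with t
                     rng.normal(0, 1.0, n)])       # independent of t

def discriminator_accuracy(feats, t, steps=2000, lr=0.5):
    """Fit a logistic 'adversary' predicting t from feats by gradient
    descent; accuracy near 0.5 means the representation is balanced."""
    X = np.column_stack([feats, np.ones(len(feats))])
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1 / (1 + np.exp(-X @ w))
        w -= lr * X.T @ (p - t) / len(t)
    return ((X @ w > 0) == t).mean()

acc_raw = discriminator_accuracy(x, t)              # adversary exploits feature 0
acc_balanced = discriminator_accuracy(x[:, 1:], t)  # "balanced" representation
print(acc_raw, acc_balanced)   # well above 0.5 vs. near 0.5
```

In ABCEI the balancing is not done by dropping features: an encoder is trained adversarially against the discriminator, while a mutual information regularizer keeps the representation predictive of the outcome, which is exactly the information-loss trade-off the abstract describes.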